In phase I of statistical process control (SPC), control charts are often used as outlier detection methods to assess process stability. Many of these methods require estimation of the covariance matrix, are computationally infeasible, or have not been studied when the dimension of the data is large. We propose the one-class peeling (OCP) method, a flexible framework that combines statistical and machine learning methods to detect multiple outliers in multivariate data. The OCP method can be applied to phase I of SPC, does not require covariance estimation, and is well suited to high-dimensional data sets with a high percentage of outliers. Our empirical evaluation suggests that the OCP method performs well in high dimensions and is computationally more efficient and robust than existing methodologies. We motivate and illustrate the use of the OCP method in a phase I SPC application on a high-dimensional data set containing Wikipedia search results for National Football League (NFL) players, teams, coaches, and managers. The example data set and the R functions OCP.R and OCPLimit.R, which compute the respective OCP distances and thresholds, are available in the supplementary materials.
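The core idea of peeling — repeatedly flagging the most extreme points and setting them aside before re-fitting — can be sketched generically. The loop below is a minimal illustration using distance to the coordinate-wise median (a covariance-free distance), not the authors' OCP.R implementation, which fits a one-class boundary at each round; the `frac` and `rounds` parameters are hypothetical knobs for the sketch.

```python
import numpy as np

def peel_outliers(X, frac=0.05, rounds=3):
    """Illustrative peeling loop: at each round, flag the fraction `frac`
    of remaining points farthest from the coordinate-wise median, then
    peel them off and repeat on the retained core."""
    X = np.asarray(X, dtype=float)
    idx = np.arange(len(X))          # indices of points still in play
    flagged = []
    for _ in range(rounds):
        if len(idx) == 0:
            break
        center = np.median(X[idx], axis=0)
        dist = np.linalg.norm(X[idx] - center, axis=1)
        k = max(1, int(frac * len(idx)))
        worst = np.argsort(dist)[-k:]        # positions of the k farthest points
        flagged.extend(idx[worst].tolist())
        idx = np.delete(idx, worst)
    return np.array(flagged), idx            # (peeled outliers, retained core)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))               # high-dimensional in-control data
X[:5] += 8.0                                 # five gross outliers
out, core = peel_outliers(X, frac=0.05, rounds=2)
```

Because no covariance matrix is inverted, the loop stays well defined even when the dimension exceeds the sample size, which is the regime the abstract targets.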
With the development of parallel computing architectures, larger and more complex finite element analyses (FEA) are being performed with higher accuracy and shorter execution times. Graphics processing units (GPUs) are one of the major contributors to this computational breakthrough. This work presents a three-stage GPU-based FEA matrix generation strategy whose key idea is to decouple the computation of global matrix indices and values by means of a novel data structure referred to as the neighbor matrix. The first stage computes the neighbor matrix on the GPU from the unstructured mesh. Using this neighbor matrix, the indices and values of the global matrix are computed separately in the second and third stages. The neighbor matrix is computed for three different element types. Two versions, performing numerical integration and assembly in the same kernel or in separate kernels, are implemented, and simulations are run for mesh sizes of up to three million degrees of freedom on a single GPU. Comparison with a GPU-based parallel implementation from the literature reveals speedups ranging from 4× to 6× for the proposed workload division strategy. Furthermore, the same-kernel implementation is found to outperform the separate-kernel implementation by 70% to 150% for different element types.
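The decoupling the abstract describes — knowing where every global-matrix entry goes before computing any entry's value — can be illustrated on a CPU in a few lines. The sketch below builds a per-node neighbor table from element connectivity and derives a CSR sparsity pattern from it; the data layout is hypothetical and much simpler than the paper's GPU-resident neighbor matrix.

```python
def neighbor_table(elements, n_nodes):
    """Stage 1 (sketch): every node in an element couples with every other
    node in that element, so the union over elements gives each node's
    neighbors, i.e. the nonzero columns of its global-matrix row."""
    nbrs = [set() for _ in range(n_nodes)]
    for elem in elements:
        for a in elem:
            nbrs[a].update(elem)
    return nbrs

def csr_indices(nbrs):
    """Stage 2 (sketch): global-matrix indices (CSR row_ptr / col_idx)
    computed entirely from the neighbor table, before any numerical
    integration produces values."""
    row_ptr, col_idx = [0], []
    for s in nbrs:
        col_idx.extend(sorted(s))
        row_ptr.append(len(col_idx))
    return row_ptr, col_idx

# Two triangles sharing an edge: nodes (0, 1, 2) and (1, 2, 3).
elements = [(0, 1, 2), (1, 2, 3)]
nbrs = neighbor_table(elements, 4)
row_ptr, col_idx = csr_indices(nbrs)
```

With the pattern fixed up front, the value-computation stage (numerical integration) can scatter results into preallocated slots without atomic index bookkeeping, which is what makes the GPU workload division effective.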
The detection of alcoholism is of great importance due to its effects on individuals and society. Automatic alcoholism detection systems (AADS) based on electroencephalogram (EEG) signals are effective, but the design of a robust AADS is a challenging problem. Current AADS designs rely on conventional, hand-engineered methods and deliver restricted performance. Driven by the success of deep learning (DL) in many recognition tasks, we implement an AAD system based on EEG signals using DL. A DL model requires a huge number of learnable parameters and a large dataset of EEG signals for training, which is not easy to obtain for the AAD problem. To solve this problem, we propose a multi-channel pyramidal convolutional neural network (MP-CNN) that requires fewer learnable parameters. Using the deep CNN model, we build an AAD system to detect from EEG signal segments whether a subject is alcoholic or normal. We validate the robustness and effectiveness of the proposed AADS using KDD, a benchmark dataset for the alcoholism detection problem. To find the brain regions that play a significant role in AAD, we investigated the effects of 19 selected EEG channels (SC-19), all channels from the whole brain (ALL-61), and five brain regions, i.e., TEMP, OCCIP, CENT, FRONT, and PERI. The results show that SC-19 plays a significant role in AAD, with an accuracy of 100%. The comparison reveals that the AADS outperforms state-of-the-art systems. The proposed AADS will be useful in medical diagnosis research and health care systems.
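Why a pyramidal layer schedule reduces learnable parameters can be seen with a back-of-envelope count: a conv layer with `c_in` input channels, `c_out` filters, and kernel size `k` has `c_out * (c_in * k * k + 1)` parameters, so tapering the widths shrinks every later layer's dominant `c_in * c_out` term. The widths below are purely illustrative, not the paper's MP-CNN architecture.

```python
def conv_params(c_in, c_out, k):
    """Learnable parameters of one 2-D conv layer: weights + biases."""
    return c_out * (c_in * k * k + 1)

def total_params(widths, k=3, c_in=1):
    """Sum parameter counts over a stack of conv layers."""
    total, prev = 0, c_in
    for w in widths:
        total += conv_params(prev, w, k)
        prev = w
    return total

constant = total_params([64, 64, 64, 64])    # same width every layer
pyramid  = total_params([64, 48, 32, 16])    # widths taper: a "pyramid"
```

For these illustrative widths the pyramidal stack has well under half the parameters of the constant-width stack, which is the kind of saving that makes training feasible on a small EEG dataset.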
Body condition score (BCS) is a common tool for indirectly estimating the mobilization of energy reserves in the fat and muscle of cattle, and it meets the requirements of animal welfare and precision livestock farming for the effective monitoring of individual animals. However, previous studies on automatic BCS systems have used manual scoring for data collection, and traditional image extraction methods have limited model accuracy. In addition, the radio frequency identification device system commonly used in ranching has the disadvantages of misreadings and damage to bovine bodies. Therefore, the aim of this research was to develop and validate an automatic system for identifying individuals and assessing BCS using a deep learning framework. This work developed a linear regression model of BCS using ultrasound backfat thickness to determine BCS for the training sets, and tested a system based on convolutional neural networks with three channels (depth, gray, and phase congruency) to analyze the back images of 686 cows. After an analysis of image model performance, online verification was used to evaluate the accuracy and precision of the system. The results showed that the selected linear regression model had a high coefficient of determination (0.976), and the correlation coefficient between manual BCS and ultrasonic BCS was 0.94. Although the overall accuracy of the BCS estimations was high (0.45, 0.77, and 0.98 within 0, 0.25, and 0.5 units, respectively), the validation for actual BCS ranging from 3.25 to 3.5 was weak (the F1 scores were only 0.6 and 0.57, respectively, within the 0.25-unit range). Overall, individual identification and BCS assessment performed well in the online measurement, with accuracies of 0.937 and 0.409, respectively.
A system for individual identification and BCS assessment was developed, and a convolutional neural network using depth, gray, and phase congruency channels to interpret image features exhibited advantages for monitoring thin cows.
Deep convolutional neural networks (DCNNs) have shown outstanding performance in the fields of computer vision, natural language processing, and complex system analysis. As performance improves with deeper layers, DCNNs incur higher computational complexity and larger storage requirements, making it extremely difficult to deploy them on resource-limited embedded systems (such as mobile devices or Internet of Things devices). Network quantization efficiently reduces the storage space required by DCNNs. However, the performance of DCNNs often drops rapidly as the quantization bit-width decreases. In this article, we propose a space-efficient quantization scheme that uses eight or fewer bits to represent the original 32-bit weights. We adopt the singular value decomposition (SVD) method to decrease the parameter size of fully-connected layers for further compression. Additionally, we propose a weight clipping method based on a dynamic boundary to improve performance at lower precision. Experimental results demonstrate that our approach can achieve up to approximately 14× compression while preserving almost the same accuracy as the full-precision models. The proposed weight clipping method can also significantly improve the performance of DCNNs when lower precision is required.
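The two compression steps the abstract describes can be sketched under assumed details: (1) clip weights to a data-dependent boundary before uniform 8-bit quantization (a percentile rule stands in here for the paper's dynamic-boundary method, whose exact criterion the abstract does not give), and (2) truncate the SVD of a fully-connected weight matrix to replace it with two thin factors.

```python
import numpy as np

def quantize(w, bits=8, clip_pct=99.5):
    """Clip to a data-dependent boundary, then uniformly quantize.
    The percentile rule is an illustrative stand-in for the paper's
    dynamic-boundary selection."""
    bound = np.percentile(np.abs(w), clip_pct)   # clipping boundary from data
    wc = np.clip(w, -bound, bound)
    scale = bound / (2 ** (bits - 1) - 1)        # map [-bound, bound] to int range
    q = np.round(wc / scale).astype(np.int8)     # 8-bit integer codes
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

def svd_compress(W, rank):
    """Replace an m x n FC weight matrix with two rank-r factors."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]

rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.05, size=10_000)           # toy weight vector
q, scale = quantize(w)
err = np.max(np.abs(dequantize(q, scale) - w))   # worst-case rounding error

W = rng.normal(size=(256, 256))                  # toy FC weight matrix
A, B = svd_compress(W, rank=32)
saved = 1 - (A.size + B.size) / W.size           # fraction of parameters removed
```

Storing int8 codes instead of float32 weights alone gives 4× compression; combining it with rank truncation of the fully-connected layers is how compression ratios in the reported 14× range become plausible.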
For the target detection task, the one-stage network structure of deep neural network models faces two problems. First, whether the anchor-box hyperparameters are designed suitably affects the training results of the whole network; second, a large downsampling factor degrades the network's ability to localize targets. To solve these problems, this paper proposes a multi-location enhancement network. The structure of the one-stage network model is redesigned, and a better scheme for selecting the anchor-box hyperparameters is proposed, so that the efficiency of the one-stage network is preserved while localization accuracy improves over previous models. Extensive experiments show that the multi-location enhancement network achieves higher localization accuracy while maintaining real-time performance. An average accuracy of 82.5 is achieved on the public Pascal VOC 2007 dataset.
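The anchor-box hyperparameters the abstract refers to are, in the usual one-stage design, the scales and aspect ratios of template boxes tiled at every feature-map position. The sketch below generates such a grid of anchors; the scale and ratio values are conventional illustrative choices, not the paper's selection scheme.

```python
import numpy as np

def make_anchors(fm_size, stride, scales, ratios):
    """Tile scale x ratio template boxes over an fm_size x fm_size feature
    map whose cells are `stride` input pixels apart; returns (x1, y1, x2, y2)
    corner coordinates in input-image space."""
    anchors = []
    for y in range(fm_size):
        for x in range(fm_size):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w, h = s * np.sqrt(r), s / np.sqrt(r)   # area s^2, aspect r
                    anchors.append((cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2))
    return np.array(anchors)

# Illustrative hyperparameters: 2 scales x 3 ratios = 6 anchors per cell.
A = make_anchors(fm_size=4, stride=16, scales=[32, 64], ratios=[0.5, 1.0, 2.0])
```

The stride here is exactly the downsampling factor the abstract criticizes: a large stride spaces anchor centers far apart in the input image, which is what limits localization precision and motivates the multi-location enhancement.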